
    Affine embeddings and intersections of Cantor sets

    Let $E, F \subset \mathbb{R}^d$ be two self-similar sets. Under mild conditions, we show that $F$ can be $C^1$-embedded into $E$ if and only if it can be affinely embedded into $E$; furthermore, if $F$ cannot be affinely embedded into $E$, then the Hausdorff dimension of the intersection $E \cap f(F)$ is strictly less than that of $F$ for any $C^1$-diffeomorphism $f$ on $\mathbb{R}^d$. Under certain circumstances, we prove the logarithmic commensurability between the contraction ratios of $E$ and $F$ if $F$ can be affinely embedded into $E$. As an application, we show that $\dim_H\bigl(E \cap f(F)\bigr) < \min\{\dim_H E, \dim_H F\}$ when $E$ is any Cantor-$p$ set and $F$ is any Cantor-$q$ set, where $p, q \geq 2$ are integers with $\log p/\log q \notin \mathbb{Q}$. This is related to a conjecture of Furstenberg about the intersections of Cantor sets.
    Comment: The paper will appear in J. Math. Pures Appl.
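
    To make the Cantor-$p$/Cantor-$q$ statement concrete, here is a worked instance (the digit sets $\{0,2\}$ and $\{0,3\}$ are illustrative choices, not taken from the abstract) with $E$ the middle-thirds Cantor set and $F$ a Cantor-$4$ set:

    % Illustrative instance of the theorem; the digit sets are example choices.
    \begin{align*}
    E &= \Bigl\{\, \sum_{k \ge 1} \varepsilon_k 3^{-k} : \varepsilon_k \in \{0,2\} \Bigr\}, & \dim_H E &= \tfrac{\log 2}{\log 3} \approx 0.6309,\\
    F &= \Bigl\{\, \sum_{k \ge 1} \delta_k 4^{-k} : \delta_k \in \{0,3\} \Bigr\}, & \dim_H F &= \tfrac{\log 2}{\log 4} = 0.5.
    \end{align*}
    % Since $3^n \neq 4^m$ for all positive integers $m,n$, we have $\log 3/\log 4 \notin \mathbb{Q}$,
    % so the theorem yields $\dim_H\bigl(E \cap f(F)\bigr) < 0.5$ for every $C^1$-diffeomorphism $f$ of $\mathbb{R}$.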

    Exploiting Style Transfer-based Task Augmentation for Cross-Domain Few-Shot Learning

    In cross-domain few-shot learning, the core issue is that a model trained on source domains struggles to generalize to the target domain, especially when the domain shift is large. Motivated by the observation that the domain shift between training tasks and target tasks is usually reflected in their style variation, we propose Task Augmented Meta-Learning (TAML), which performs style transfer-based task augmentation to improve domain generalization. First, Multi-task Interpolation (MTI) is introduced to fuse features from multiple tasks with different styles, making a more diverse set of styles available. Furthermore, a novel task-augmentation strategy called Multi-Task Style Transfer (MTST) is proposed to perform style transfer on existing tasks so that the model learns discriminative, style-independent features. We also introduce a Feature Modulation (FM) module that injects random styles to further improve the model's generalization. The proposed TAML increases the diversity of the styles of training tasks and thereby helps train a model with better domain generalization ability. Its effectiveness is demonstrated through theoretical analysis and thorough experiments on two popular cross-domain few-shot benchmarks.
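
    The abstract does not spell out the MTI/MTST internals; below is a minimal, hypothetical sketch of the underlying style transfer-based feature augmentation, assuming the common AdaIN/MixStyle recipe in which a feature map's "style" is its per-channel mean and standard deviation and new styles are obtained by interpolating these statistics across tasks. All names (mix_styles, feats_task_a, ...) are illustrative placeholders, not identifiers from the paper.

    import torch

    def mix_styles(feat_a: torch.Tensor, feat_b: torch.Tensor,
                   alpha: float = 0.5, eps: float = 1e-6) -> torch.Tensor:
        """Re-stylize feat_a with statistics interpolated between feat_a and feat_b.

        feat_a, feat_b: (N, C, H, W) feature maps drawn from episodes of two tasks.
        Returns a tensor with feat_a's content but a mixed style (a sketch of the
        general idea, not the TAML modules themselves).
        """
        mu_a = feat_a.mean(dim=(2, 3), keepdim=True)
        sig_a = feat_a.std(dim=(2, 3), keepdim=True) + eps
        mu_b = feat_b.mean(dim=(2, 3), keepdim=True)
        sig_b = feat_b.std(dim=(2, 3), keepdim=True) + eps
        # Interpolate the per-channel "style" statistics of the two tasks.
        mu_mix = alpha * mu_a + (1 - alpha) * mu_b
        sig_mix = alpha * sig_a + (1 - alpha) * sig_b
        normalized = (feat_a - mu_a) / sig_a   # strip feat_a's original style
        return normalized * sig_mix + mu_mix   # inject the interpolated style

    # Toy usage: augment features of one episode with the style of another.
    feats_task_a = torch.randn(5, 64, 14, 14)  # e.g. 5 support images, 64 channels
    feats_task_b = torch.randn(5, 64, 14, 14)
    alpha = torch.distributions.Beta(0.5, 0.5).sample().item()  # random mixing weight
    augmented = mix_styles(feats_task_a, feats_task_b, alpha)
    print(augmented.shape)  # torch.Size([5, 64, 14, 14])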